# Low-resource language models
## Qwen2.5 1.5B TrSummarization Unsloth GGUF
**Author:** anilguven · **License:** Apache-2.0 · **Task:** Large Language Model

A 4-bit quantized model fine-tuned from Qwen2.5-1.5B, focused on Turkish text generation and summarization tasks.
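To make the "4-bit quantized" label concrete, here is a back-of-envelope comparison of weight-memory footprints for a 1.5B-parameter model at fp16 versus 4-bit precision. The parameter count comes from the model name; the figures are approximate and cover weights only (no KV cache or activations), and the helper function is purely illustrative.

```python
def weight_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB for a model with n_params
    parameters stored at bits_per_weight bits each."""
    return n_params * bits_per_weight / 8 / 2**30

n = 1.5e9  # 1.5B parameters, per the model name

# fp16 weights: ~2.79 GiB; 4-bit weights: ~0.70 GiB
print(f"fp16: {weight_gib(n, 16):.2f} GiB")
print(f"q4:   {weight_gib(n, 4):.2f} GiB")
```

The roughly 4x reduction is what makes a model of this size practical on consumer GPUs and CPUs via GGUF runtimes.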
## Tibert Base
**Author:** fgaim · **Task:** Large Language Model

A BERT-base model pretrained specifically for Tigrinya, trained for 40 epochs on a dataset of 40 million tokens.
## Xlm Roberta Base Finetuned Ner Wolof
**Author:** mbeukman · **Task:** Sequence Labeling (Transformers)

A token classification model for Wolof named entity recognition (NER), fine-tuned from xlm-roberta-base on the Wolof portion of the MasakhaNER dataset.
## Xlm Roberta Base Finetuned Swahili Finetuned Ner Swahili
**Author:** mbeukman · **Task:** Sequence Labeling (Transformers)

A named entity recognition model fine-tuned on the Swahili portion of the MasakhaNER dataset for NER in Swahili text.
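Token classification models like the MasakhaNER fine-tunes above emit one BIO tag per token; turning those tags into entity spans is a small post-processing step. The sketch below shows that decoding in plain Python. The example tokens and tags are illustrative, not actual model output.

```python
def bio_to_spans(tokens, tags):
    """Collapse token-level BIO tags (the output format of
    token classification NER models) into (entity_type, text) spans."""
    spans = []
    current = None  # (entity_type, [tokens]) for the span being built
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity begins
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(tok)        # continue the current entity
        else:                             # "O" or an inconsistent tag
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = None
    if current:
        spans.append((current[0], " ".join(current[1])))
    return spans

# Illustrative Wolof-like sentence with hand-written tags
tokens = ["Ndax", "Aminata", "Mbaye", "dem", "na", "Dakar"]
tags   = ["O", "B-PER", "I-PER", "O", "O", "B-LOC"]
print(bio_to_spans(tokens, tags))
# → [('PER', 'Aminata Mbaye'), ('LOC', 'Dakar')]
```

In practice the Hugging Face token-classification pipeline can perform this aggregation for you, but the logic above is what happens under the hood.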
## Mt5 Base Yoruba Adr
**Author:** Davlan · **Task:** Machine Translation (Transformers)

A Yoruba automatic diacritic restoration (ADR) model fine-tuned from mT5-base, trained on the JW300 and Menyo-20k datasets.